Reading list for Sun, Jan.21, 2024

Total: 708, of which 3 suggested today and 420 expired

Today's reading list

3 links selected from 288 today Sun, Jan.21, 2024
  1. Coffee Break Scroll: Coffee Around The World — Google Arts & Culture

    From a cup of mud to a macchiato, coffee puts a spring in everyone's step ... (2435 chars. See body)


    From a cup of mud to a macchiato, coffee puts a spring in everyone's step

    Oggetto Banale Coffee Maker (1980) by Alessandro Mendini in collaboration with Daniela Puppa, Paola Navone, and Franco RaggiMuseum of Design Atlanta (MODA)

    Pour yourself a cup of java and scroll to learn about one of the world's most popular beverages.

    Exposição Conhecendo o Café (2013) by Museu do CaféMuseu do Café

    It all starts with a seed. Coffee plants take many years to fully mature and are a challenge to cultivate, but the fruits of that labor are these cherries.

    Seduction (2019-06-14) by Gabriela Lavalle (photographer), Itzel Mendoza (floral artist), and Rafael Muñoz-Márquez (editor)Colectivo Rokunin

    When the cherries have ripened, they're harvested by workers in coffee-producing countries around the world. Around 60% of the coffee grown is the arabica variety, with robusta making up the remaining 40%.

    Coffee (1935) by Candido PortinariProjeto Portinari

    This 1935 painting by Brazilian painter Candido Portinari shows just some of the work that goes into each cup of the aromatic elixir.

    The coffee cherries are dried, sometimes in a machine like this, and the fruit is removed. The seeds inside are known as green coffee beans.

    Toasted coffee (2021-05-31) by Colectivo Rokunin and On The ShoreColectivo Rokunin

    The beans are then roasted, resulting in the rich, brown coffee we know and love.

    Coffee grinders from the First World War (2020) by Dubai CultureDubai Culture & Arts Authority

    The cup isn't done just yet. The beans must be ground first, sometimes by industrial machines and sometimes by hand at home.

    Now, with the fragrance already in the air, we can begin the brewing process, done most simply by pouring hot water over the coffee grounds.

    Coffee (1956) by Richard DiebenkornChrysler Museum of Art

    Moments later, we can enjoy a piping-hot, aromatic beverage. Coffee is served and prepared in almost innumerable ways.

    This opulent building is the Coffee House at Quirinale Palace in Rome. Imagine having a house just for coffee!

    Nighthawks (1942) by Edward Hopper (American, 1882-1967)The Art Institute of Chicago

    Day or night, hot or cold, people around the world love a cup of joe. How do you take your coffee?



  2. Concurrency Freaks: 50 years later, is Two-Phase Locking the best we can do?

    Two phase locking (2PL) was the first of the general-purpose Concurrency Controls to be invented which provided Serializability. In fact, 2PL gives more than Serializability, it gives Opacity, a much stronger isolation l ... (12997 chars. See body)


    Two-phase locking (2PL) was the first general-purpose concurrency control to be invented that provided Serializability. In fact, 2PL gives more than Serializability: it gives Opacity, a much stronger isolation level.
    2PL was published in 1976, which incidentally is the year I was born, and it is likely that Jim Gray and his buddies had the idea long before it was published, which means 2PL first came into existence nearly 50 years ago.
    Jim Gray Endowed Scholarship | Paul G. Allen School of Computer Science &  Engineering

    After all that time has passed, is this the best we can do?

    Turns out no: we can do better with 2PLSF, but let's start from the beginning.

    When I use the term "general-purpose concurrency control" I mean an algorithm that allows access to multiple objects (or records, or tuples, whatever you want to call them) with all-or-nothing semantics. In other words, an algorithm that lets you do transactions over multiple data items.
    Two-Phase Locking has several advantages over the other concurrency controls that have since been invented, but in my view there are two important ones: simplicity and a strong isolation level.

    In 2PL, before accessing a record for read or write access, we must first take the lock that protects this record.
    During the transaction we keep acquiring locks for each access, and only at the end of the transaction, when we know that no further accesses will be made, do we release all the locks. Having an instant in time (the end of the transaction) where all the locks on the accessed data are held means there is a linearization point for our operation (transaction): we get a consistent view of the different records and can write to other records in a consistent way. It doesn't get much simpler than this.
    Today this idea may sound embarrassingly obvious, but 50 years ago many database researchers thought it was ok to release a lock as soon as the access to its record was complete. And yes, it is possible to do so, but such a concurrency control is not serializable.
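To make the protocol concrete, here is a minimal Python sketch of the idea; the record store, the transfer() transaction, and the lock-ordering trick are all invented for illustration. (Taking every lock up front in a global order, as done here, is the conservative flavor of 2PL, which also sidesteps deadlock.)

```python
import threading

# Hypothetical record store: each record is protected by its own mutex.
records = {"A": 0, "B": 0}
locks = {name: threading.Lock() for name in records}

def transfer(src, dst, amount):
    """A 2PL transaction: acquire every lock before the first access,
    release all of them only after the last access."""
    held = []
    # Growing phase: take locks in a global order (here, sorted by name)
    # so two transfers can never deadlock on each other.
    for name in sorted((src, dst)):
        locks[name].acquire()
        held.append(name)
    try:
        # All locks are held here: this is the linearization point, so
        # we read and write a consistent view of both records.
        records[src] -= amount
        records[dst] += amount
    finally:
        # Shrinking phase: release everything at the end, never earlier.
        for name in held:
            locks[name].release()

transfer("A", "B", 10)
print(records["A"], records["B"])  # -10 10
```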

    As for strong isolation, database researchers continue to invent (and publish papers about) concurrency controls that are not serializable, which might suggest Serializability is not that important for databases. On the other hand, all transactional commercial databases that I know of use 2PL or some combination of it with T/O (timestamp ordering) or MVCC (multi-version concurrency control).

    Moreover, in the field of concurrent data structures, linearizability is the gold standard, which means 2PL is used heavily. If we need to write to multiple nodes of a data structure in a consistent way, we typically need something like 2PL, at least for the write accesses. The exception is lock-free data structures, but hey, that's why (correct) lock-free is hard!

    Ok, 2PL is easy to use and has strong isolation, so this means we're ready to go and don't need anything better than 2PL, right?
    I'm afraid not. 2PL has a couple of big disadvantages: poor read-scalability and live-lock progress.

    The classic 2PL was designed for mutual exclusion locks, which means that when two threads are performing a read-access on the same record, they will conflict and one of them (or both) will abort and restart.
    This problem can be solved by replacing the mutual exclusion locks with reader-writer locks, but it's not as simple as this.
    Mutual exclusion locks can be implemented with a single bit, representing the state of locked or unlocked.
    Reader-writer locks also need this bit and, in addition, a counter of the number of readers currently holding the lock in read mode. This counter needs enough bits to represent the maximum number of readers. For example, a 7-bit counter supports at most 127 threads, in case they all decide to acquire the read-lock on the same reader-writer lock instance.
    For such a scenario this implies that each lock would take 1 byte, which may not sound like much, but if you have billions of records in your DB then you will need billions of bytes for those locks. Still reasonable, but now we get into the problem of contention on the counter.
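The bit-packing argument can be sketched like this; the constants and helper functions are hypothetical, and a real lock would perform each transition with one atomic compare-and-swap on the byte rather than plain Python functions:

```python
WRITER = 0x80            # bit 7: a writer holds the lock
READERS = 0x7F           # bits 0-6: reader count (at most 127 readers)

def try_read_lock(word):
    """Return the new lock word, or None if the read-lock can't be taken.
    Fails if a writer is present or the 7-bit counter would overflow."""
    if word & WRITER or (word & READERS) == READERS:
        return None
    return word + 1

def try_write_lock(word):
    if word != 0:        # any reader or writer blocks a new writer
        return None
    return WRITER

w = 0
w = try_read_lock(w)     # first reader arrives
w = try_read_lock(w)     # second reader arrives
print(w & READERS)       # 2
print(try_write_lock(w)) # None: readers still present
```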

    Certain workloads have lots of read accesses on the same data, they are read-non-disjoint. An example of this is the root node of a binary search tree, where all operations need to read the root before they start descending the nodes of the tree.
    When using 2PL, each of these accesses on the root node implies a lock acquisition, and even if we're using reader-writer locks, it implies heavy contention on the lock that protects the root node.

    Previous approaches have taken a stab at this problem, for example TLRW by Dave Dice and Nir Shavit in SPAA 2010.
    By using reader-writer locks they were able to have much better performance than using mutual exclusion locks, but still far from what the optimistic concurrency controls can achieve.
    Take the example of the plot below where we have an implementation similar to TLRW with each read-access contending on a single variable of the reader-writer lock, applied to a binary search tree, a Rank-based Relaxed AVL. Scalability is flat regardless of whether we're doing mostly write-transactions (left side plot) or just read-transactions (right side plot).

    Turns out it is possible to overcome this "read-indicator contention" problem through the use of scalable read-indicators. Our favorite algorithm is a reader-writer lock where each reader announces its arrival/departure on a separate cache line, thus incurring no contention for read-lock acquisition. The downside is that the thread taking the write-lock must scan through all those cache lines to ascertain whether the write-lock can be granted, thus incurring a higher cost for write-lock acquisition.
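A rough sketch of the read-indicator idea, with invented names and a plain Python list standing in for the per-cache-line slots (a real implementation would use padded atomic variables and memory fences, and would not busy-wait like this):

```python
import threading

NUM_THREADS = 4  # assumed fixed thread count, as in this setting

class ReadIndicatorRWLock:
    """Reader-writer lock with one read-indicator slot per thread.
    Readers never touch shared state on arrival; the writer pays the
    cost of scanning every slot."""
    def __init__(self):
        self.reader = [False] * NUM_THREADS
        self.writer = threading.Lock()       # serializes writers

    def read_lock(self, tid):
        while True:
            self.reader[tid] = True          # announce arrival (own slot)
            if not self.writer.locked():
                return                       # no writer: read-lock granted
            self.reader[tid] = False         # writer present: back off
            while self.writer.locked():
                pass

    def read_unlock(self, tid):
        self.reader[tid] = False             # announce departure

    def write_lock(self):
        self.writer.acquire()
        # The expensive part: scan all reader slots until all are clear.
        while any(self.reader):
            pass

    def write_unlock(self):
        self.writer.release()

lock = ReadIndicatorRWLock()
lock.read_lock(0)
lock.read_lock(1)        # two readers enter without contending
lock.read_unlock(0)
lock.read_unlock(1)
lock.write_lock()        # writer scans every slot, finds them clear
lock.write_unlock()
print("ok")
```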
    As far as I know, the first reader-writer lock algorithms with this technique were shown in the paper "NUMA-Aware Reader-Writer Locks", of which Dave Dice and Nir Shavit are two of the authors, along with Irina Calciu, Yossi Lev, Victor Luchangco, and Virendra Marathe.
    This paper shows three different reader-writer lock algorithms, two of them highly scalable, but none of them starvation-free.

    So what we did was take some of these ideas to make a better reader-writer lock, which also scales well for read-lock acquisition but has other properties, and we used this to implement our own concurrency control which we called Two-Phase Locking Starvation-Free (2PLSF).
    The reader-writer locks in 2PLSF have one bit per thread reserved for the read-lock but they are located in their own cache line, along with the bits (read-indicators) of the next adjacent locks.


    As in the "NUMA-Aware Reader-Writer Locks" paper, the cost shifts to the write-lock acquisition, which needs to scan multiple cache lines to acquire the write-lock. There is no magic here, just trade-offs, but this turns out to be a pretty good trade-off, as most workloads tend to be on the read-heavy side. Even write-intensive workloads spend a good amount of time executing read-accesses, for example during the record lookup phase.
    With our improved reader-writer lock the same benchmark shown previously for the binary search tree looks very different:

    With this improved reader-writer lock we are able to scale 2PL even on read-non-disjoint workloads, but that still leaves the other major disadvantage: 2PL is prone to live-lock.

    There are several variants of the original 2PL; some of them aren't even serializable, so I wouldn't call them 2PL anymore and won't go into them here.
    For the classical 2PL, there are three variants and they are mostly about how to deal with contention. They're usually named:
        - No-Wait
        - Wait-Or-Die
        - Deadlock-detection

    When a conflict is encountered, the No-Wait variant aborts the transaction (its own, or the other one) and retries. The retry can be immediate, or it can happen later, based on an exponential backoff scheme. The No-Wait approach has live-lock progress: if one transaction attempts to modify record A and then record B while another attempts to modify record B and then record A, the two may conflict indefinitely, aborting and restarting without either ever committing.
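A sketch of the No-Wait discipline, with invented record names; acquire(blocking=False) plays the role of a try-lock, and the exponential backoff is the optional scheme mentioned above:

```python
import random
import threading
import time

# Hypothetical record store; each record protected by its own lock.
locks = {"A": threading.Lock(), "B": threading.Lock()}
data = {"A": 0, "B": 0}

def no_wait_txn(names, attempts=100):
    """No-Wait 2PL: on any lock conflict, release everything, back off,
    and restart the whole transaction from scratch."""
    for attempt in range(attempts):
        held = []
        for name in names:
            if locks[name].acquire(blocking=False):  # try-lock, never wait
                held.append(name)
            else:
                break
        if len(held) == len(names):
            for name in names:           # all locks held: do the accesses
                data[name] += 1
            for name in held:
                locks[name].release()
            return True
        for name in held:                # conflict: abort (no writes yet,
            locks[name].release()        # so there is nothing to undo)
        # Optional exponential backoff before retrying.
        time.sleep(random.uniform(0, 0.0001 * 2 ** min(attempt, 10)))
    return False                         # live-lock: two txns locking
                                         # (A then B) and (B then A) can
                                         # keep aborting each other.

ok = no_wait_txn(["A", "B"])
print(ok)   # True (no contention in this single-threaded demo)
```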

    The Deadlock-Detection variant keeps an internal list of threads waiting on a lock and detects cycles (deadlocks).
    This is problematic for reader-writer locks because it would require each reader to have its own list, which itself needs a (mutual exclusion) lock to protect it. And detecting the cycles would mean scanning all the readers' lists when the lock is taken in read-lock mode.
    Theoretically it should be possible to make this scheme starvation-free, but that would require starvation-free locks, and as there is no (published) highly scalable reader-writer lock with starvation-free progress, it kind of defeats the purpose. Moreover, having one list per reader may imply high memory usage. Who knows, maybe one day someone will try this approach.

    The Wait-Or-Die variant imposes an order on all transactions, typically via a timestamp taken when the transaction starts. When a lock conflict arises, the transaction decides whether to wait for the lock or to abort by comparing its own timestamp with the timestamp of the lock owner. This works fine for mutual exclusion locks, as the owner can be stored in the lock itself using a unique thread identifier, but if we want to do it for reader-writer locks then a thread-id is needed per reader.
    If we want to support 256 threads then this means we need 8 bits x 256 = 256 bytes per reader-writer lock. Using 256 bytes per lock is a hard pill to swallow!
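The ordering rule itself is simple; here is a sketch of the wait-or-die decision, where itertools.count stands in for the fetch_and_add() on a global atomic (all names here are invented for illustration):

```python
import itertools

# Global id source: every transaction pays one fetch_and_add() here,
# which is exactly the contended atomic variable discussed next.
next_ts = itertools.count(1)

def begin():
    return next(next_ts)             # stands in for fetch_and_add()

def on_conflict(my_ts, owner_ts):
    """Wait-or-die: an older transaction (smaller timestamp) waits for
    the lock owner; a younger one dies (aborts and restarts, keeping its
    original timestamp), so the oldest transaction always makes
    progress and no transaction starves forever."""
    return "wait" if my_ts < owner_ts else "die"

t1, t2 = begin(), begin()
print(on_conflict(t1, t2))   # wait: t1 is older than the owner t2
print(on_conflict(t2, t1))   # die: t2 is younger than the owner t1
```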

    But memory usage is not the real obstacle here. The Wait-Or-Die approach implies that all transactions have a unique transaction id so as to order them, for example, they can take a number from an atomic variable using a fetch_and_add() instruction.
    The problem is that on most modern CPUs you won't be able to do more than about 40 million fetch_and_add() operations per second on a contended atomic variable. That may seem like a lot (Visa does about 660 million transactions per day, so 40 million per second sounds pretty good), but for an in-memory DBMS it's not that much, and for concurrent data structures in particular it is on the low side.
    Even worse, this atomic fetch_and_add() must be done for all transactions, whether they are write-transactions or read-transactions.
    For example, on one of our machines it's not really possible to go above 20 M fetch_and_add() per second, which means that scalability suckz:

    To put this in perspective, one of my favorite concurrency controls is TL2, which was invented by (surprise!) none other than Dave Dice, Nir Shavit, and Ori Shalev.
    I hope by now you know who the experts in this stuff are ;)

    Anyways, in TL2 read-transactions don't need to do an atomic fetch_and_add(): they execute optimistic reads, which is faster than any read-lock acquisition you can think of. At least for read-transactions, TL2 can scale to hundreds of millions of transactions per second. By comparison, 2PL with Wait-Or-Die will never be able to go above about 40 M tps.
    This means if high scalability is your goal, then you would be better off with TL2 than 2PL… except, 2PLSF solves this problem too.

    In 2PLSF only the transactions that run into a conflict need to be ordered, i.e. only those need to do a fetch_and_add() on the central atomic variable. This has two benefits: there is less contention on the central atomic variable that assigns the unique transaction ids, and transactions without conflicts are not bounded by the 40 M tps plateau.
    This means we can have 200 M tps running without conflicts alongside 40 M tps that do conflict, because the conflicting transactions are the only ones that need to do the fetch_and_add() and therefore the only ones bounded by the maximum number of fetch_and_add() operations the CPU can execute per second.
    On top of this, the 2PLSF algorithm provides starvation-freedom.
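One way to picture this lazy ordering (an illustration of the idea as described above, not the actual 2PLSF code) is a transaction that only draws a global timestamp on its first conflict:

```python
import itertools

conflict_ts = itertools.count(1)   # touched only when a conflict happens

class Txn:
    """Illustration of lazy ordering: a transaction draws a global
    timestamp only on its first conflict, so conflict-free transactions
    never touch the shared counter at all."""
    def __init__(self):
        self.ts = None             # unordered until a conflict occurs

    def priority(self):
        if self.ts is None:        # first conflict: pay the fetch_and_add
            self.ts = next(conflict_ts)
        return self.ts

fast = Txn()                       # runs and commits without conflicts
slow = Txn()
print(fast.ts)                     # None: never ordered, never contended
print(slow.priority())             # 1: ordered only because it conflicted
print(slow.priority())             # 1: the timestamp is drawn only once
```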

    Summary

    In this post we saw some of the advantages and disadvantages of 2PL and some of the variants of 2PL.
    We explained what it takes to scale 2PL: make a better reader-writer lock.
    But the big disadvantage of 2PL is the live-lock progress, which some variants could seemingly resolve, but in practice they don't because they will not scale, even with a better reader-writer lock.
    Then we described 2PLSF, a novel algorithm invented by me, Andreia and Pascal Felber to address these issues.

    In summary, 2PLSF is what 2PL should have been from the start: a concurrency control that scales well even when reads are non-disjoint and that provides starvation-free transactions, the highest form of blocking progress there is.
    Moreover, it's pretty good at resolving certain kinds of conflicts, which means it can remain scalable even when some conflicts arise. 2PLSF is not perfect, but it's good enough, and certainly better than TL2 when it comes to resolving conflicts.

    Despite being two-phase locking, it's as close to 2PL as a jackhammer is to a pickaxe. 

    2PLSF is not your grandfather's 2PL



  3. The Best Side Hustles With Little or No Start-Up Costs

    Get your side hustle off the ground at basically no starting costs—just a little savvy. ... (7506 chars. See body)





    You know the grind never stops, but how do you get the grind to start? In addition to my main job guiding you through the ins and outs of personal finance here at Lifehacker, I’m also hustling. Every year I can depend on additional money from humor writing, copy editing, voice-over work, and even a pet sitting gig or two.

    It’s difficult to gauge how much extra income you can actually earn from a side hustle, since payment varies wildly depending on your expertise, location, and amount of time you have. It’s also important to mention that the most lucrative side hustles require a hefty investment up front, especially if you’re trying to get a small business off the ground. Similarly, many “be your own boss” side hustles require having wealth in the first place, like already owning property in order to be a fruitful Airbnb host.

    Luckily, plenty of side hustles require little to no starting costs. Let’s take a look at some of the most promising side gigs today, how you can get started, and what you can reasonably expect to get out of it.

    Note: All the average hourly pay estimates come from Payscale; you can read more about how they crowdsource and validate salary data here.

    Online side hustles with low starting costs

    According to Side Hustle Nation, the most popular ways to make money on the side are often all done online (a fact that only became increasingly true over the pandemic).

    Virtual assistant

    If you have experience with administrative support around the office, you can turn that into freelance virtual assistant (VA) work.

    What you put into it: A virtual assistant typically does general admin work like answering emails, booking appointments, managing calendars, and the like. More experienced VAs might also do customer support, data entry, social media management, and a range of other remote office-related tasks.

    What you can get out of it: VAs can set their own hours and pay rate and choose the clients they want to work with. The average hourly pay for a Virtual Assistant is $17.

    Tip: Create a profile that showcases your specialities and browse freelance gig sites like FlexJobs and Upwork.

    Transcribing audio and video

    What you put into it: Basic typing skills and tech savvy, usually one to three hours per gig (but this can vary greatly).

    What you can get out of it: Transcribing averages at about $15 per hour.

    Tip: You don’t need much to get good at transcribing audio and video, but it helps to have high-speed internet and an especially good pair of headphones.

    Editorial services

    Plenty of people out there are looking for a second pair of eyes on their writing, whether that means web content, research papers, business presentations, legal documents, and more.

    What you put into it: Proofreading skills, plus proof of your editorial experience so you actually get gigs from freelancing sites like Fiverr, FlexJobs, and Upwork.

    What you can get out of it: According to Fiverr, the average rate for a full-fledged, completed project is $50-$75. Most writers on Upwork charge $0.01–$0.40 per word.

    Tip: You need to stand out among the thousands of other freelancers vying for a quick, well-paying proofreading job. Use your social media channels to self-promote and rack up positive reviews and testimonials.

    E-commerce

    What you put into it: Several hours doing market research, taking good photos and writing descriptions for items, handling buyer interactions, and then physically packaging and shipping items to buyers.

    What you can get out of it: Income varies depending on item; you have tons of flexibility in terms of what, how much, and when you sell.

    Tip: Good customer service can turn you into a top seller; conversely, one negative review can have a (potentially unfair) negative impact on your sales. Here are some of the best sites to sell all your old stuff.

    Online surveys

    Examples: Paid research studies, online focus groups, anything on Survey Junkie.

    What you put into it: Not a lot! Most surveys will take about 20 minutes of your time, and involve little to no real effort.

    What you can get out of it: Not a lot! Think $2-5 per hour. Considering these can be mindless and you can do them whenever you have downtime, they're good for a little pocket change.

    Tip: Never, ever pay money to join a survey-taking site. That’s a sure sign of a scam.

    In-person side hustles with low starting costs

    Consider some of the following odd jobs available to you out there in the real world.

    Dog walking

    What you put into it: With dog walking, ideally you have some animal experience (and love!) as well as the ability and desire to get your steps in.

    What you can get out of it: HomeGuide reports the following average rates for dog walkers, depending on the additional pet care provided and the sort of clientele you’re working with:

    • Low-end dog walking services: $10 per 30-minute walk

    • Mid-range dog walking services: $20 per 30-minute walk

    • High-end dog walking services: $35 or more per 30-minute walk

    As with all the side hustles here, how you advertise your services and how flexible you are with your business will greatly impact how much money you take home, and that's after dog walking apps take a cut of your earnings.

    Tip: To find reliable clients in your area, I’d suggest making an account with Rover. Once you have a routine, pitch the idea to establish your business with them off-app.

    Professional organizing

    What you put into it: Organizational skills and the ability to travel to your clients. Go big with all the types of organizing services you can provide: Offices, garages, and kitchens all require different kinds of re-arranging and decluttering. You should also probably brush up on some Marie Kondo.

    What you can get out of it: The average hourly pay for a professional organizer/consultant is around $20-$25.

    Tip: Advertise your services in different types of local community groups on Facebook (so long as it’s permitted by group moderators). Another way to spread the word about your services is to publish a Craigslist service ad for only about $5.

    Mobile notary services

    What you put into it: Before you start traveling to people’s homes and offices to notarize their documents, you do need to meet your state’s specific requirements for becoming a mobile notary. The application may cost closer to $40-50, which is more than the near-zero start-up costs promised above, but I’m still including it in this round-up because of how reliable and high-earning becoming a mobile notary can be.

    What you can get out of it: Notaries typically charge $75-200 per appointment.

    Tip: Yet again, how you market yourself directly impacts how much business you get. Once you get your mobile notary business up and running, consider ways to expand your offerings, such as I-9 and apostille services.

    Oddly specific odd jobs

    Examples: Resume editing, voice-over artist, cleaning car interiors, knife-sharpening, pet waste removal, dating app ghostwriter—you get the picture.

    What you put into it: A specific set of skills. This one totally depends on you.

    What you can get out of it: Pay varies greatly. If you have expertise in a specific subject, you might earn $20 per hour as an online tutor. But if you’re, say, an experienced graphic designer, you may be able to set rates at several hundred dollars per project.

    Tip: Think about what sets you apart and get creative with the sort of services you can provide. Create a profile that showcases your specialities and browse freelance gig sites like Editorr, FlexJobs, and Upwork.


Random expired

2 links selected from 420 expired links

Expired

420 links expired today Sun, Jan.21, 2024
  1. What's in a Good Error Message?

    In a way, an error message tells a story; and as with every good story, you need to establish some context about its general settings. For an error message, this should tell the recipient what the code in question was tr (Open link)

  2. https://rust-book.cs.brown.edu/

    Welcome to the Rust Book experiment, and thank you for your participation! First, we want to introduce you to the new mechanics of this experiment. The main mechanic is quizzes: each page has a few quizzes about the pag (Open link)

  3. Shields Down

    Resignations happen in a moment, and it’s not when you declare, “I’m resigning.” The moment happened a long time ago when you received a random email from a good friend who asked, “I know you’re really happy with your cu (Open link)

  4. What's in a Good Error Message? - Gunnar Morling

    In a way, an error message tells a story; and as with every good story, you need to establish some context about its general settings. For an error message, this should tell the recipient what the code in question was tr (Open link)

  5. Let's talk SkipList

    BackgroundSkipLists often come up when discussing “obscure” data-structures but in reality they are not that obscure, in fact many of the production grade softwares actively use them. In this post I’ll try to go into Ski (Open link)

  6. You and your mind garden

    In French, “cultiver son jardin intérieur” means to tend to your internal garden—to take care of your mind. The garden metaphor is particularly apt: taking care of your mind involves cultivating your curiosity (the seeds (Open link)

  7. here

    Sample code and instructions for steps through different container image build options. - GitHub - maeddes/options-galore-container-build: Sample code and instructions for steps through different container image build op (Open link)

  8. User space

    For the term "user space" as used in Wikipedia, see Wikipedia:User pages. "Kernel space" redirects here. For the mathematical definition, see Null space. This article needs additional citations for verification. Please h (Open link)

  9. Convey

    2022 February 01 16:21 stuartscott 1473754¤ 1240149¤ You may have noticed that the January edition of the Convey Digest looks a little different from the previous ones - the color scheme is now based on the dominant (Open link)

  10. Diátaxis

    The Diátaxis framework solves the problem of structure in technical documentation, making it easier to create, maintain and use. (Open link)

  11. Maintaining a medium-sized Java library in 2022 and beyond

    scc --exclude-dir docs/book ─────────────────────────────────────────────────────────────────────────────── Language Files Lines Blanks Comments Code Complexity ──────────────────────────────── (Open link)

  12. Manhattan Phoenix review: epic history of how New York was forged by fire – and water | Books | The Guardian

    Daniel Levy pins the great fire of 1835 as the birth event of modern Manhattan in a tale as teeming as the city itself (Open link)

  13. The Book

    This is the story of Simon Wardley. Follow his journey from bumbling and confused CEO lost in the headlights of change to someone with a vague idea of what they're doing. (Open link)

  14. The Four Innovation Phases of Netflix’s Trillions Scale Real-time Data Infrastructure | by Zhenzhong Xu | Feb, 2022 | Medium

    The blog post will share the four phases of Real-time Data Infrastructure’s iterative journey in Netflix (2015-2021). For each phase, we will go over the evolving business motivations, the team’s unique challenges, the (Open link)

  15. Beyond Microservices: Streams, State and Scalability

    Gwen Shapira talks about how microservices evolved in the last few years, based on experience gained while working with companies using Apache Kafka to update their application architecture. (Open link)

  16. In Praise Of Anchovies: If You Don’t Already Love Them, You Just Haven’t Yet Discovered How Good They Can Be

    For many people, anchovies are one of those foods to be avoided like the plague. But for Ken Gargett anchovies are not a love-it-or-hate it food. Rather, they are a love-it-or-you-have-not-discovered-how-good-they-can-be (Open link)

  17. Owen Pomery

    OWEN D. POMERY WORK SHOP ABOUT + CONTACT EDITORIAL RELIEFS EDITION AIRBNB FLAT EYE ERNEST POHA HOUSE CONCEPT SENET VICTORY POINT KIOSK SCI-FI SPOT ILLUSTRATIONS GAME OF THRONES (GAME) NARRATIVE ARCHIT (Open link)

  18. The Go Memory Model

    Table of ContentsIntroductionAdviceInformal OverviewMemory ModelImplementation Restrictions for Programs Containing Data RacesSynchronizationInitializationGoroutine creationGoroutine destructionChannel communicationLocks (Open link)

  19. Optimizing Distributed Joins: The Case of Google Cloud Spanner and DataStax Astra DB | by DataStax | Building the Open Data Stack | Medium

    In this post, learn how relational and NoSQL databases, Google Cloud Spanner and DataStax Astra DB, optimize distributed joins for real-time applications. Distributed joins are commonly considered to… (Open link)

  20. https://programmerweekly.us2.list-manage.com/track/click?u=72f68dcee17c92724bc7822fb&id=c6a9958764&e=d7c3968f32

    Ever since I started to work on the Apache APISIX project, I’ve been trying to improve my knowledge and understanding of REST RESTful HTTP APIs. For this, I’m reading and watching the following sources: Books. At the mom (Open link)

  21. Top 10 Architecture Characteristics / Non-Functional Requirements with Cheatsheet | by Love Sharma | Jun, 2022 | Dev Genius

    Imagine you are buying a car. What essential features do you need in it? A vehicle should deliver a person from point A to point B. But what we also check in it is Safety, Comfort, Maintainability… (Open link)

  22. 7 days ago

    This article has a large gap in the story: it ignores sensor data sources, which are both the highest velocity and highest volume data models by multiple orders of magnitude. They have become ubiquitous in diverse, mediu (Open link)

  23. The Kaizen Way

    Are you looking for a new approach to health? Do you want to finally get the results you have been hoping for? How do you find a practitioner that is willing to try a different approach and guide you through your journey (Open link)

  24. The Grug Brained Developer

    Introduction this collection of thoughts on software development gathered by grug brain developer grug brain developer not so smart, but grug brain developer program many long year and learn some things although mostly s (Open link)

  25. A Better Way to Manage Projects

    The GOVNO framework is a novel approach to project management that aims to improve upon the shortcomings of the popular scrum methodology. Each letter of the acronym represents a key aspect of the framework: G: Governan (Open link)